
    The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores, and between assessor score and the number of citations, is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
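    As an illustration of the correlation analysis described above, the sketch below computes rank correlations between two assessors' scores and between assessor score and citations. The scores and citation counts are hypothetical, and the study's datasets and its control for journal impact factor are not reproduced.

```python
# A minimal sketch of the correlation analysis described above, using
# hypothetical assessor scores and citation counts; the study's actual
# datasets and its control for journal impact factor are not reproduced.
from scipy.stats import spearmanr

# Each row: (assessor 1 score, assessor 2 score, citations accrued) -- illustrative only.
papers = [
    (8, 7, 120),
    (6, 7,  35),
    (5, 4,  12),
    (7, 5,  60),
    (4, 5,   8),
    (6, 6,  25),
]

scores_1 = [p[0] for p in papers]
scores_2 = [p[1] for p in papers]
citations = [p[2] for p in papers]

# Agreement between two assessors who rated the same papers.
rho_scores, p_scores = spearmanr(scores_1, scores_2)

# Association between assessor score and the citations a paper accrues.
rho_cites, p_cites = spearmanr(scores_1, citations)

print(f"assessor vs assessor:  rho = {rho_scores:.2f} (p = {p_scores:.3f})")
print(f"assessor vs citations: rho = {rho_cites:.2f} (p = {p_cites:.3f})")
```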

    Marketing data: Has the rise of impact factor led to the fall of objective language in the scientific article?

    The language of science should be objective and detached and should place data in the appropriate context. The aim of this commentary was to explore the notion that recent trends in the use of language have led to a loss of objectivity in the presentation of scientific data. The relationship between value-laden vocabulary and impact factor has been explored among fundamental biomedical research and clinical journals. It appears that fundamental research journals with high impact factors have experienced a rise in value-laden terms over the past 25 years.

    Inflated Impact Factors? The True Impact of Evolutionary Papers in Non-Evolutionary Journals

    Amongst the numerous problems associated with the use of impact factors as a measure of quality are the systematic differences in impact factors that exist among scientific fields. While in theory this can be circumvented by limiting comparisons to journals within the same field, this is impossible for a diverse and multidisciplinary field like evolutionary biology, in which the majority of papers are published in journals that publish both evolutionary and non-evolutionary papers. However, a journal's overall impact factor may well be a poor predictor of the impact of its evolutionary papers. The extremely high impact factors of some multidisciplinary journals, for example, are believed by many to be driven mostly by publications from other fields. Despite plenty of speculation, however, we know as yet very little about the true impact of evolutionary papers in journals not specifically classified as evolutionary. Here I present, for a wide range of journals, an analysis of the number of evolutionary papers they publish and their average impact. I show that there are large differences in impact between evolutionary and non-evolutionary papers within journals; while the impact of evolutionary papers published in multidisciplinary journals is substantially overestimated by their overall impact factor, the impact of evolutionary papers in many of the more specialized, non-evolutionary journals is significantly underestimated. This suggests that, for evolutionary biologists, publishing in high-impact multidisciplinary journals should not receive as much weight as it does now, while evolutionary papers in more narrowly defined journals are currently undervalued. Importantly, however, their ranking remains largely unaffected. While journal impact factors may thus indeed provide a meaningful qualitative measure of impact, a fair quantitative comparison requires a more sophisticated journal classification system, together with multiple field-specific impact statistics per journal.
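    The within-journal comparison described above can be sketched as a simple aggregation: group papers by journal, then compare the mean citations of the evolutionary subset with the journal-wide mean. The journals, field labels, and citation counts below are hypothetical, not the study's data.

```python
# A minimal sketch of a within-journal, field-specific impact comparison;
# all records are hypothetical.
from collections import defaultdict

# (journal, field, citations) for individual papers -- illustrative values only.
papers = [
    ("Nature", "evolution", 18), ("Nature", "other", 95), ("Nature", "other", 60),
    ("J Evol Biol", "evolution", 14), ("J Evol Biol", "evolution", 9),
    ("Am Nat", "evolution", 22), ("Am Nat", "other", 11),
]

all_by_journal = defaultdict(list)
evo_by_journal = defaultdict(list)
for journal, field, cites in papers:
    all_by_journal[journal].append(cites)
    if field == "evolution":
        evo_by_journal[journal].append(cites)

for journal, cites in all_by_journal.items():
    evo = evo_by_journal.get(journal, [])
    overall_mean = sum(cites) / len(cites)
    evo_mean = sum(evo) / len(evo) if evo else float("nan")
    print(f"{journal}: journal-wide mean {overall_mean:.1f}, evolutionary-paper mean {evo_mean:.1f}")
```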

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. Comment: Scientometrics, in press.
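    As a rough illustration of a percentile-based indicator in the spirit of I3, the sketch below scores each paper by its citation percentile within a reference set and sums the percentiles; the published I3 uses specific percentile classes and reference sets that are not reproduced here.

```python
# A minimal sketch of a percentile-rank-based indicator in the spirit of I3;
# the exact published class scheme and reference sets are not reproduced.
def percentile_rank(citations, reference):
    """Percentage of the reference set with citation counts at or below `citations`."""
    at_or_below = sum(1 for r in reference if r <= citations)
    return 100.0 * at_or_below / len(reference)

# Hypothetical citation counts: a field-wide reference set and one unit's papers.
reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
unit_papers = [2, 8, 34]

# Sum (not average) of percentile ranks, so both quality and quantity contribute.
i3_like = sum(percentile_rank(c, reference_set) for c in unit_papers)
print(f"I3-style score: {i3_like:.1f}")
```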

    The information sources and journals consulted or read by UK paediatricians to inform their clinical practice and those which they consider important: a questionnaire survey

    Background: Implementation of health research findings is important for medicine to be evidence-based. Previous studies have found variation in the information sources thought to be of greatest importance to clinicians, but publication in peer-reviewed journals is the traditional route for dissemination of research findings. There is debate about whether the impact made on clinicians should be considered as part of the evaluation of research outputs. We aimed to determine, first, which information sources are generally most consulted by paediatricians to inform their clinical practice, and which sources they considered most important, and second, how many and which peer-reviewed journals they read. Methods: We enquired, by questionnaire survey, about the information sources and academic journals that UK medical paediatric specialists generally consulted, attended or read and considered important to their clinical practice. Results: The same three information sources – professional meetings & conferences, peer-reviewed journals and medical colleagues – were, overall, the most consulted or attended and ranked the most important. No one information source was found to be of greatest importance to all groups of paediatricians. Journals were widely read by all groups, but the proportion ranking them first in importance as an information source ranged from 10% to 46%. The number of journals read varied between the groups, but Archives of Disease in Childhood and BMJ were the most read journals in all groups. Six out of the seven journals previously identified as containing best paediatric evidence are the most widely read overall by UK paediatricians; however, only the two most prominent are widely read by those based in the community. Conclusion: No one information source is dominant; therefore, a variety of approaches to Continuing Professional Development and the dissemination of research findings to paediatricians should be used. Journals are an important information source. A small number of key ones can be identified, and such analysis could provide valuable additional input into the evaluation of clinical research outputs.

    A system for success: BMC Systems Biology, a new open access journal

    BMC Systems Biology is the first open access journal spanning the growing field of systems biology from molecules up to ecosystems. The journal has launched as more and more institutes are founded that are similarly dedicated to this new approach. BMC Systems Biology builds on the ongoing success of the BMC series, providing a venue for all sound research in the systems-level analysis of biology.

    What Makes a Great Journal Great in the Sciences? Which Came First, the Chicken or the Egg?

    The paper is concerned with analysing what makes a great journal great in the sciences, based on quantifiable Research Assessment Measures (RAM). Alternative RAM are discussed, with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI). Various ISI RAM that are calculated annually or updated daily are defined and analysed, including the classic 2-year impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored - By Even The Authors), Impact Factor Inflation (IFI), and three new RAM, namely Historical Self-citation Threshold Approval Rating (H-STAR), 2-Year Self-citation Threshold Approval Rating (2Y-STAR), and Cited Article Influence (CAI). The RAM data are analysed for the 6 most highly cited journals in 20 highly varied and well-known ISI categories in the sciences, where the journals are chosen on the basis of 2YIF. The application to these 20 ISI categories could be used as a template for other ISI categories in the sciences and social sciences, and as a benchmark for newer journals in a range of ISI disciplines. In addition to evaluating the 6 most highly cited journals in each of the 20 ISI categories, the paper also highlights the similarities and differences in alternative RAM, finds that several RAM capture similar performance characteristics for the most highly cited scientific journals, and determines that PI-BETA is not highly correlated with the other RAM and hence conveys additional information regarding research performance. In order to provide a meta-analysis summary of the RAM, which are predominantly ratios, harmonic mean rankings are presented of the 13 RAM for the 6 most highly cited journals in each of the 20 ISI categories. It is shown that emphasizing the impact factor, specifically the 2-year impact factor, of a journal to the exclusion of other informative RAM can lead to a distorted evaluation of journal performance and influence on different disciplines, especially in view of inflated journal self-citations.
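    A minimal sketch of the classic impact-factor family named above (2YIF, 5YIF, and Immediacy) is given below. The counts are hypothetical, and the official ISI calculations involve editorial decisions about citable items that are not modelled here.

```python
# A minimal sketch of the 2-year impact factor (2YIF), 5-year impact factor
# (5YIF), and Immediacy (0YIF), using hypothetical counts.

def impact_factor(citations_in_year, items_published, census_year, window):
    """Citations received in `census_year` to items published in the preceding
    `window` years, divided by the number of citable items from those years."""
    years = range(census_year - window, census_year)
    cites = sum(citations_in_year[census_year].get(y, 0) for y in years)
    items = sum(items_published.get(y, 0) for y in years)
    return cites / items if items else 0.0

# citations_in_year[census_year][publication_year] -> citations counted in the census year
citations_in_year = {2012: {2012: 40, 2011: 150, 2010: 180, 2009: 90, 2008: 70, 2007: 60}}
items_published = {2007: 100, 2008: 110, 2009: 120, 2010: 130, 2011: 140, 2012: 150}

two_year = impact_factor(citations_in_year, items_published, 2012, window=2)   # classic 2YIF
five_year = impact_factor(citations_in_year, items_published, 2012, window=5)  # 5YIF
immediacy = citations_in_year[2012][2012] / items_published[2012]              # Immediacy / 0YIF

print(f"2YIF = {two_year:.2f}, 5YIF = {five_year:.2f}, Immediacy = {immediacy:.2f}")
```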

    Bibliometric data in clinical cardiology revisited. The case of 37 Dutch professors

    In this paper, we assess the bibliometric parameters of 37 Dutch professors in clinical cardiology. These are the Hirsch index (h-index) based on all papers, the h-index based on first-authored papers, the number of papers, the number of citations, and the citations per paper. A top 10 for each of the five parameters was compiled. In theory, the same 10 professors might appear in each of these top 10s. Alternatively, each of the 37 professors under assessment could appear one or more times. In practice, we found 22 out of these 37 professors in the 5 top 10s. Thus, there is no golden parameter. In addition, there is too much inhomogeneity in citation characteristics, even within a relatively homogeneous group of clinical cardiologists. Therefore, citation analysis should be applied with great care in science policy. This is even more important when different fields of medicine are compared in university medical centres. It may be possible to develop better parameters in the future, but the present ones are simply not good enough. Also, we observed a quite remarkable explosion of publications per author, which can, paradoxical as it may sound, probably not be interpreted as an increase in the productivity of scientists, but as the effect of an increase in the number of co-authors and the strategic effect of networks.
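    Two of the parameters listed above, the h-index over all papers and the h-index restricted to first-authored papers, can be sketched as follows; the citation counts and authorship flags are hypothetical.

```python
# A minimal sketch of the h-index computed over all papers and over
# first-authored papers only, plus citations per paper; data are hypothetical.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for position, cites in enumerate(ranked, start=1) if cites >= position)

# Hypothetical publication records for one professor: (citations, first-authored?)
papers = [(120, True), (45, False), (30, True), (22, False),
          (15, True), (9, False), (4, True), (1, False)]

all_citations = [c for c, _ in papers]
first_author_citations = [c for c, first in papers if first]

print("h-index, all papers:           ", h_index(all_citations))
print("h-index, first-authored papers:", h_index(first_author_citations))
print("citations per paper:           ", round(sum(all_citations) / len(all_citations), 1))
```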

    Citation analysis of orthopaedic literature; 18 major orthopaedic journals compared for Impact Factor and SCImago

    Background: One of the disadvantages of the Impact Factor (IF) is self-citation. The SCImago Journal Rank (SJR) indicator excludes self-citations and considers the quality, rather than absolute numbers, of citations of a journal by other journals. The present study re-evaluated the influence of self-citation on the 2007 IF for 18 major orthopaedic journals and investigated the difference in ranking between IF and SJR. Methods: The journals were analysed for self-citation both overall and divided into a general group (n = 8) and a specialized group (n = 10). Self-cited and self-citing rates, as well as citation densities and IFs corrected for self-citation (cIF), were calculated. The rankings of the 18 journals by IF and by SJR were compared and the absolute difference between these rankings (ΔR) was determined. Results: Specialized journals had higher self-citing rates (p = 0.01, Δmedian = 9.50, 95%CI -19.42 to 0.42), higher self-cited rates (p = 0.0004, Δmedian = -10.50, 95%CI -15.28 to -5.72) and greater differences between IF and cIF (p = 0.003, Δmedian = 3.50, 95%CI -6.1 to 13.1). There was no significant correlation between self-citing rate and IF for both groups (general: r = 0.46, p = 0.27; specialized: r = 0.21, p = 0.56). When the difference in ranking between IF and SJR was compared between both groups, sub-specialist journals were ranked lower compared to their general counterparts (ΔR: p = 0.006, Δmedian = 2.0, 95%CI -0.39 to 4.39). Conclusions: Citation analysis shows that specialized orthopaedic journals have specific self-citation tendencies. The correlation between self-cited rate and IF in our sample was large but, due to small sample size, not significant. The SJR excludes self-citations in its calculation and therefore enhances the underestimation in ranking of specialized journals.
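    The self-citation measures used above reduce to simple ratios: a self-citing rate over the references a journal gives, a self-cited rate over the citations it receives, and an IF corrected by removing self-citations (cIF). The sketch below assumes these definitions; the exact denominators and citation windows used in the study may differ, and all counts are hypothetical.

```python
# A minimal sketch of journal self-citation measures; the study's exact
# denominators and citation windows may differ, and all counts are hypothetical.

def self_citation_measures(cites_received, self_cites_received,
                           cites_given, self_cites_given, citable_items):
    """Return (self-cited rate, self-citing rate, IF, cIF) for one journal."""
    self_cited_rate = self_cites_received / cites_received    # share of incoming citations that are self-citations
    self_citing_rate = self_cites_given / cites_given         # share of outgoing references pointing back to the journal
    impact_factor = cites_received / citable_items
    corrected_if = (cites_received - self_cites_received) / citable_items  # cIF: self-citations removed
    return self_cited_rate, self_citing_rate, impact_factor, corrected_if

# Hypothetical counts for one journal over a two-year citation window.
received, self_received, given, self_given, items = 900, 180, 1500, 200, 400
cited_rate, citing_rate, jif, cif = self_citation_measures(received, self_received,
                                                           given, self_given, items)
print(f"self-cited rate {cited_rate:.0%}, self-citing rate {citing_rate:.0%}, "
      f"IF {jif:.2f}, cIF {cif:.2f}")
```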

    Scientific Publications on Primary Biliary Cirrhosis from 2000 through 2010: An 11-Year Survey of the Literature

    BACKGROUND: Primary biliary cirrhosis (PBC) is a chronic liver disease characterized by intrahepatic bile-duct destruction, cholestasis, and fibrosis. It can lead to cirrhosis and eventually liver failure. PBC also shows some regional differences with respect to incidence and prevalence that are becoming more pronounced each year. Recently, researchers have paid more attention to PBC. To evaluate the development of PBC research during the past 11 years, we determined the quantity and quality of articles on this subject. We also compared the contributions of scientists from the US, UK, Japan, Italy, Germany, and China. METHODS: The English-language papers covering PBC published in journals from 2000 through 2010 were retrieved from the PubMed database. We recorded the number of papers published each year, analyzed the publication types, and calculated the accumulated and average impact factors (IFs) and citations for every country. The quantity and quality of articles on PBC were compared by country. We also contrasted the level of PBC research in China with that in other countries. RESULTS: The total number of articles did not significantly increase during the past 11 years. The number of articles from the US exceeded those from any other country; the publications from the US also had the highest IFs and the most citations. Four other countries showed complex trends with respect to the quantity and quality of articles about PBC. CONCLUSION: The researchers from the US have contributed the most to the development of PBC research. They currently represent the highest level of research. Some high-level studies, such as RCTs, meta-analyses, and in-depth basic studies, should be launched. The gap between China and the advanced level is still enormous. Chinese investigators still have a long way to go.
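    The per-country tallies described above amount to a simple aggregation: count papers per year and accumulate IFs and citations by country. The records below are hypothetical, not the retrieved PubMed data.

```python
# A minimal sketch of per-year and per-country publication tallies; the
# records are hypothetical, not the retrieved PubMed data.
from collections import defaultdict

# (year, country, journal IF, citations) for individual PBC papers -- illustrative values only.
records = [
    (2005, "US", 7.3, 40), (2005, "UK", 4.1, 12), (2006, "US", 12.0, 85),
    (2006, "Japan", 3.5, 9), (2007, "China", 2.2, 5), (2007, "US", 5.6, 30),
]

papers_per_year = defaultdict(int)
totals = defaultdict(lambda: {"papers": 0, "if_sum": 0.0, "citations": 0})

for year, country, jif, cites in records:
    papers_per_year[year] += 1
    totals[country]["papers"] += 1
    totals[country]["if_sum"] += jif
    totals[country]["citations"] += cites

print("papers per year:", dict(sorted(papers_per_year.items())))
for country, t in sorted(totals.items()):
    print(f"{country}: {t['papers']} papers, accumulated IF {t['if_sum']:.1f}, "
          f"average IF {t['if_sum'] / t['papers']:.1f}, citations {t['citations']}")
```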